Multimodal Engagement Prediction in Multiperson Human–Robot Interaction

Authors

Abstract

The ability to measure the engagement level of humans interacting with robots paves the way towards intuitive and safe human-robot interaction. Recent approaches have achieved reasonable progress in predicting human engagement in physically situated environments. However, engagement estimation is still a challenging problem, especially in an open-world environment, due to the difficulty of creating a system that monitors a variety of social cues in real time. Furthermore, interactions may involve a group of subjects interacting simultaneously with the robot, which increases the prediction complexity. In this paper, we design a real-time system with generalization capability. We propose to estimate engagement using a three-stage approach based on a combination of learning-based and rule-based approaches. First, state-of-the-art deep learning methods are used to extract features from the input frames. Then, a simple neural network produces an attention score by incorporating gaze and head pose, which is assigned to all subjects in the scene using a face recognition algorithm. Finally, a classification step predicts the state of each subject to initiate/terminate the interaction with the robot. To effectively evaluate our system, we assess each phase separately. Additionally, we use an online evaluation study in which subjects were allowed to interact freely with an industrial robot. Our model achieves, on average, 96% precision, 90% recall, and 93% F-score.
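To make the three-stage structure concrete, the Python sketch below wires feature extraction, attention scoring, and rule-based classification together. It is a minimal illustration only: the per-subject feature set, the analytic attention formula, and every threshold are assumptions made for this example (the paper uses deep feature extractors and a small neural network at stage 2), not the authors' actual implementation.

from dataclasses import dataclass
import math


@dataclass
class SubjectFeatures:
    """Per-subject cues for one frame (stage 1 output, normally from deep models)."""
    face_id: str        # identity assigned by a face recognition module
    gaze_yaw: float     # gaze direction relative to the robot, in degrees
    gaze_pitch: float
    head_yaw: float     # head pose relative to the robot, in degrees
    head_pitch: float
    distance_m: float   # distance between subject and robot, in metres


def attention_score(f: SubjectFeatures) -> float:
    """Stage 2: combine gaze and head pose into a [0, 1] attention score.

    The paper uses a small neural network; this analytic proxy only
    illustrates the idea that attention decays as the subject looks away.
    """
    gaze_dev = math.hypot(f.gaze_yaw, f.gaze_pitch)
    head_dev = math.hypot(f.head_yaw, f.head_pitch)
    deviation = 0.6 * gaze_dev + 0.4 * head_dev            # assumed weighting
    return max(0.0, min(1.0, 1.0 - deviation / 60.0))      # 60 deg ~ fully averted (assumed)


def engagement_state(score: float, distance_m: float) -> str:
    """Stage 3: rule-based decision to initiate or terminate the interaction."""
    if score > 0.6 and distance_m < 2.0:
        return "initiate_interaction"
    if score < 0.2 or distance_m > 3.5:
        return "terminate_interaction"
    return "hold"


if __name__ == "__main__":
    frame = [
        SubjectFeatures("subject_a", 5.0, 2.0, 8.0, 1.0, 1.2),
        SubjectFeatures("subject_b", 45.0, 10.0, 50.0, 5.0, 2.8),
    ]
    for subj in frame:
        s = attention_score(subj)
        print(subj.face_id, round(s, 2), engagement_state(s, subj.distance_m))

Running the example prints a high attention score and an "initiate_interaction" decision for the subject facing the robot, and a "hold"/"terminate" decision for the subject looking away, mirroring the per-subject decisions the abstract describes.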


Similar articles

MMLI: Multimodal Multiperson Corpus of Laughter in Interaction

The aim of the Multimodal and Multiperson Corpus of Laughter in Interaction (MMLI) was to collect multimodal data of laughter with the focus on full body movements and different laughter types. It contains both induced and interactive laughs from human triads. In total we collected 500 laugh episodes of 16 participants. The data consists of 3D body position information, facial tracking, multipl...

Multiperson utility

We approach the problem of preference aggregation by endowing both individuals and coalitions with partially-ordered or incomplete cardinal preferences. Consistency across preferences for coalitions comes in the form of the Extended Pareto Rule: if two disjoint coalitions A and B prefer x to y, then so does the coalition A ∪ B. The Extended Pareto Rule has important consequences for the social ...
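Stated compactly, the rule quoted above reads as follows; the notation (strict coalition preference written as \succ_A) is our own reading of the snippet, not the paper's formalism:

\[
  A \cap B = \emptyset,\qquad x \succ_A y \ \text{and}\ x \succ_B y
  \;\Longrightarrow\; x \succ_{A \cup B} y .
\]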

Multimodal People Engagement with iCub

In this paper we present an engagement system for the iCub robot that is able to arouse in human partners a sense of “co-presence” during human-robot interaction. This sensation is naturally triggered by simple reflexes of the robot, which speaks to the partners and gazes at the current “active partner” (e.g. the talking partner) during interaction tasks. The active partner is perceived through a m...
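For intuition, the short Python sketch below shows one plausible way such an "active partner" could be selected from multimodal cues. The specific cues (speech activity, face visibility) and the priority rule are assumptions made for illustration, not the iCub system's actual perception pipeline.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Partner:
    name: str
    is_speaking: bool    # e.g. from voice activity / sound localisation (assumed cue)
    face_visible: bool   # e.g. from a face detector (assumed cue)


def active_partner(partners: list) -> Optional[Partner]:
    """Prefer a visible partner who is currently speaking; otherwise any visible one."""
    speaking = [p for p in partners if p.is_speaking and p.face_visible]
    if speaking:
        return speaking[0]
    visible = [p for p in partners if p.face_visible]
    return visible[0] if visible else None


if __name__ == "__main__":
    group = [Partner("partner_1", False, True), Partner("partner_2", True, True)]
    target = active_partner(group)
    print("gaze target:", target.name if target else None)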

Bodily Engagement in Multimodal Interaction: A Basis for a New Design Paradigm?

The creative processes of interaction design operate in terms we generally use for conceptualising human-computer interaction (HCI). Therefore the prevailing design paradigm provides a framework that essentially affects and guides the design process. We argue that the current mainstream design paradigm for multimodal user-interfaces takes human sensory-motor modalities and the related user-inter...

Recognizing Planned, Multiperson Action

Multi-person action recognition requires models of structured interaction between people and objects in the world. This paper demonstrates how highly structured, multi-person action can be recognized from noisy perceptual data using visually grounded goal-based primitives and low-order temporal relationships that are integrated in a probabilistic framework. The representation, which is motivate...


Journal

Journal title: IEEE Access

Year: 2022

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2022.3182469